Method to determine the complex amplitude of the electromagnetic field associated with a scene
Patent abstract:
Method for determining the complex amplitude of the electromagnetic field associated with a scene, comprising a) capturing, by means of a camera, a plurality of images of the scene focused on focus planes arranged at different distances, wherein the camera comprises a lens of focal length F and a sensor arranged at a certain distance from the lens in its image space, and b) taking at least one pair of images from the plurality of images and determining the wavefront accumulated up to the conjugate plane in the object space corresponding to the plane intermediate between the focus planes of the two images of the pair. (Machine translation by Google Translate, not legally binding)

Publication number: ES2578356A1
Application number: ES201431900
Filing date: 2014-12-22
Publication date: 2016-07-26
Inventors: José Manuel Rodriguez Ramos; Juan Manuel Trujillo Sevilla; Juan José Fernández Valdivia; Jonas Philipp Lüke
Applicant: Universidad de La Laguna
IPC main class:
Patent description:
DESCRIPTION

METHOD FOR DETERMINING THE COMPLEX AMPLITUDE OF THE ELECTROMAGNETIC FIELD ASSOCIATED WITH A SCENE

OBJECT OF THE INVENTION

The present invention relates to a method for determining the complex amplitude of the electromagnetic field associated with a scene. The method of the invention allows the optical reconstruction of the entire scene (modulus and phase of the electromagnetic field), which enables its subsequent use in various applications, such as obtaining the distance map of the scene, representing the scene in stereo 3D or integral 3D mode, representing the scene fully focused, with the aperture synthesized at will, or corrected for optical distortion (caused by refractive index changes). The invention is applicable in different technical fields, including computational photography and adaptive optics (astronomical, ophthalmological, microscopy, etc.).

BACKGROUND OF THE INVENTION

Until now, generating a three-dimensional image (stereo or integral) of a scene has required capturing the scene from different points of view. Orth (Orth, A., & Crozier, K. B. (2013). Light field moment imaging. Optics Letters, 38(15), 2666-2668) generates a stereo (non-integral) image from 2 defocused images, using the "Light Field Moment Imaging" method, working in the transformed domain. Park (Park, J. H., Lee, S. K., Jo, N. Y., Kim, H. J., Kim, Y. S., & Lim, H. G. (2014). Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays. Optics Express, 22(21), 25444-25454) proposes a filtered back-projection algorithm applied to the lightfield so that, from defocused images of the scene, stereo and integral 3D images are created. In this case, the defocused images (intensities) are cuts at different angles of the lightfield in transformed space. Acquiring few defocused images is most appropriate in low-lighting scenarios.
However, working in the transformed domain with few defocused images causes blurring due to the absence of information at certain spatial frequencies.

The curvature sensor recovers the wavefront phase at the pupil from two defocused images. The geometric sensor proposed by Van Dam and Lane (Van Dam, M. A., & Lane, R. G. (2002). Wave-front sensing from defocused images by use of wave-front slopes. Applied Optics, 41(26), 5497-5502) also recovers the wavefront phase at the pupil from two defocused images. However, measuring the wavefront phase at the pupil only allows correcting aberrations on the optical axis.

DESCRIPTION OF THE INVENTION

The above problems are solved by a method according to claim 1 and a device according to claim 10. The dependent claims define preferred embodiments of the invention.

In a first inventive aspect, a method is defined to determine the complex amplitude of the electromagnetic field associated with a scene, which comprises the following stages:

a) capturing a plurality of images of the scene by means of a camera, the images being focused on focus planes arranged at different distances, wherein the camera comprises a lens of focal length F and a sensor arranged at a certain distance from the lens in its image space,

b) taking at least one pair of images from among the plurality of images and determining the wavefront accumulated up to the conjugate plane in the object space corresponding to the plane intermediate between the focus planes of the two images of the pair, the wavefront W(x, y) being determined as:

W(x, y) = Σ_{p=0}^{N−1} d_p Z_p(x, y)

where {Z_p(x, y)} is a set of predefined polynomials and N the number of polynomials used in the development, and where the coefficients d_p are determined by solving the system of equations:

(u_2X(j) − u_1X(j)) / (2z) = Σ_{p=0}^{N−1} d_p ∂Z_p/∂x, evaluated at x = (u_1X(j) + u_2X(j))/2, y = (u_1Y(k) + u_2Y(k))/2

(u_2Y(k) − u_1Y(k)) / (2z) = Σ_{p=0}^{N−1} d_p ∂Z_p/∂y, evaluated at x = (u_1X(j) + u_2X(j))/2, y = (u_1Y(k) + u_2Y(k))/2

with 2z being the distance between the focus planes of the two images of the pair, {(u_1X(j), u_1Y(k)), j, k = 1 ... T} points belonging to the first image of the pair, and {(u_2X(j), u_2Y(k)), j, k = 1 ...
T} points belonging to the second image of the pair, such that for every 1 ≤ j, k ≤ T it is verified that:

∫_{−∞}^{u_1Y(k)} ∫_{−∞}^{u_1X(j)} f_1XY(x, y) dx dy = s(j) s(k)

and

∫_{−∞}^{u_2Y(k)} ∫_{−∞}^{u_2X(j)} f_2XY(x, y) dx dy = s(j) s(k)

where s(j) is a sequence of real numbers with values between 0 and 1, monotonically increasing for every 1 ≤ j ≤ T, and f_XY is the two-dimensional density function describing the probability of occurrence of a photon, given in each case by the normalized intensity I(x, y) of the corresponding image of the pair, that is:

∫_{−∞}^{u_1Y(k)} ∫_{−∞}^{u_1X(j)} I_1(x, y) dx dy = s(j) s(k)

∫_{−∞}^{u_2Y(k)} ∫_{−∞}^{u_2X(j)} I_2(x, y) dx dy = s(j) s(k)

The present invention not only allows generating a three-dimensional image from defocused images of the scene taken from a single point of view, but also the tomographic phase distribution of the scene. This means that the electromagnetic field contained in the scene is completely available without using different points of view, as is the case with lightfield-capture (plenoptic) cameras, with the consequent improvement in the final optical resolution obtained, which in the case of plenoptic cameras is limited by the diameter of the subapertures associated with each point of view.

In the context of the invention, a plurality of images will be understood as a number of images greater than or equal to two.

The camera with which the images are captured corresponds to a conventional optical system: it includes a single lens, or lens system, working at a fixed or variable (interchangeable) focal length, and a sensor arranged at a certain distance from the optical system in the image space. The images captured by the camera are images conjugated at different planes. Each includes focused elements (those arranged at the image's focus plane) and defocused elements (those located in front of and behind the focus plane).
According to the method of the invention, each pair of images among the plurality of captured images allows determining the phase of the wavefront accumulated up to the turbulence layer conjugated to the acquisition position, following the classical rules of converging lenses. In this way, by subtracting the contributions of pairs of images obtained at different conjugate distances, it is possible to find the value of the turbulence at that distance in terms of a phase image, i.e. a wavefront phase map. Thus, when the plurality of images includes only two images, the method of the invention allows obtaining the wavefront at a single plane, at the conjugate distance of the associated pair of defocused images.

In a preferred embodiment, the two images of each selected pair of images are taken, respectively, on either side of the focus. In a preferred embodiment, the two images of each selected pair of images are taken at symmetrical distances on either side of the focus. However, the method of the invention is valid for any pair of defocused images.

The method of the invention is a multi-conjugate, turbulence-layer-oriented tomography method (layer-oriented MCAO), based on the use of defocused images of extended objects, instead of conventional phase sensors such as the Shack-Hartmann or the pyramid sensor.

The method of the present invention allows determining the complex amplitude of the electromagnetic field associated with a scene from the capture of defocused images of that scene, acquired even in real time (less than 10 ms when working in the visible with atmospheric turbulence, 24 images per second in the case of video, etc.), with a single lens and a single point of view, without the camera used for capture having any microlens array in the optical path.

In one embodiment the wavefront is given by the expression:

W(x, y) = Σ_{p=0}^{N−1} Σ_{q=0}^{N−1} d_pq Z_pq(x, y)

with

Z_pq(x, y) = (1/N) e^{(2πi/N)(px + qy)}

for every 0 ≤ p, q ≤ N − 1.
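As an illustration of the complex-exponential development above, the following minimal Python sketch (not part of the patent; the grid size N and the indices p, q are arbitrary choices) checks numerically that the basis element Z_pq(x, y) = (1/N) e^{(2πi/N)(px+qy)} obeys the derivative relation used later, ∂Z_pq/∂x = (2πip/N) Z_pq:

```python
import numpy as np

def Z(p, q, x, y, N):
    """Complex-exponential basis element Z_pq(x, y) = (1/N) exp((2*pi*i/N)(p*x + q*y))."""
    return np.exp(2j * np.pi * (p * x + q * y) / N) / N

# Check the analytic derivative d/dx Z_pq = (2*pi*i*p/N) * Z_pq against a central difference.
N, p, q = 8, 3, 2
x, y, h = 0.4, 1.7, 1e-6
numeric = (Z(p, q, x + h, y, N) - Z(p, q, x - h, y, N)) / (2 * h)
analytic = (2j * np.pi * p / N) * Z(p, q, x, y, N)
assert abs(numeric - analytic) < 1e-6
```

The same check holds for the y derivative with factor 2πiq/N.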
Advantageously, in this embodiment the two-dimensional wavefront is recovered developed as a function of complex exponentials, which allows obtaining directly the Cartesian distribution of the horizontal and vertical wavefront phase gradients, and therefore the use of classical gradient-integration methods, such as the Fourier-transform filter or Hudgin's method.

In one embodiment the accumulated wavefront is determined for a plurality of pairs of images.

In one embodiment, the method further comprises determining the phase variation between two planes of the object space as the subtraction of the wavefronts accumulated up to said planes. Preferably, the phase variation is determined for a plurality of planes.

The present invention allows obtaining the electromagnetic field in full (modulus and phase), not only the intensity, by working with the defocused images in the measurement domain (not in the transformed domain), together with a tomographic recovery of the wavefront phase. Advantageously, the results obtained working in the measurement domain are much cleaner than those obtained in the transformed domain, where the absence of information at certain spatial frequencies causes blurring when starting from few defocused images. Acquiring few defocused images is most appropriate in low-lighting scenarios.

Compared to state-of-the-art methods that recover the wavefront phase only at the pupil, the present invention has the advantage of tomographically recovering the wavefront phase that best fits the set of defocused images acquired from a scene. The tomographic measurement of the wavefront phase allows correcting aberrations in the entire field of view of the entrance pupil.
In one embodiment, the method further comprises determining, from P images selected from among the plurality of captured images, the value of the lightfield (L) focused at a distance F at M distinct values of u, M ≤ P, as the lightfield values that verify the system of equations:

Σ_{n=1}^{M} L_F(n + [(x − n)/α_j], n) = α_j² F² I_j(x), ∀j ∈ {1 ... P} and ∀x ∈ {1 ... k}

where P is the number of images considered for the determination of the lightfield, F the focal length of the lens, L_F the value of the lightfield focused at distance F, α_j F the focus distance of image j and I_j(x) the intensity of image j, and where [x] denotes the integer nearest to x, obtaining as a result, for each image j with 1 ≤ j ≤ P, the lightfield L_F(x, u_j) evaluated at the value of u_j resulting from the adjustment, that is, the view of the lightfield corresponding to the value u_j, x and u being the two-dimensional vectors that determine the position at the sensor and at the camera lens, respectively.

Although the determination of the lightfield has been described in combination with the procedure for determining the wavefront phase according to the first inventive aspect, the method of determining the lightfield can be carried out in isolation. Thus, in a further inventive aspect a method is presented to determine the lightfield, comprising:

a) capturing a plurality of images of the scene by means of a camera, the images being focused on focus planes arranged at different distances, wherein the camera comprises a lens of focal length F and a sensor arranged at a distance from the lens equal to its focal length, and

b) determining, from P images selected from among the plurality of captured images, the value of the lightfield (L) focused at a distance F at M distinct values of u, M ≤ P, as the lightfield values that verify the system of equations:

Σ_{n=1}^{M} L_F(n + [(x − n)/α_j], n) = α_j² F² I_j(x), ∀j ∈ {1 ... P} and ∀x ∈ {1 ...
k}

where P is the number of images considered for the determination of the lightfield, F the focal length of the lens, L_F the value of the lightfield focused at distance F, α_j F the focus distance of image j and I_j(x) the intensity of image j, and where [x] denotes the integer nearest to x, obtaining as a result, for each image j with 1 ≤ j ≤ P, the lightfield L_F(x, u_j) evaluated at the value of u_j resulting from the adjustment, that is, the view of the lightfield corresponding to the value u_j, x and u being the two-dimensional vectors that determine the position at the sensor and at the camera lens, respectively.

In one embodiment, the value of the lightfield is determined by solving the system of equations by least squares, that is, by minimizing the expression:

| Σ_{n=1}^{M} L_F(n + [(x − n)/α_j], n) − α_j² F² I_j(x) |²

In a second aspect, a device is defined to determine the complex amplitude of the electromagnetic field associated with a scene, which comprises image-capture means, comprising a lens of focal length F and an image sensor arranged parallel to the lens, at a certain distance from the lens in its image space, and processing means configured to carry out step b) of the method according to the first inventive aspect.

All features and/or method steps described herein (including the claims, description and drawings) may be combined in any combination, except combinations of mutually exclusive features.

DESCRIPTION OF THE DRAWINGS

To complement the description being made, and in order to aid a better understanding of the characteristics of the invention according to a preferred example of practical embodiment thereof, a set of drawings is attached as an integral part of said description, in which the following has been represented with illustrative and non-limiting character:

Figures 1 and 2 schematically represent a part of the method of the invention.
Figure 3 schematically represents the lightfield between the lens and the sensor of a camera.

Figures 4 and 5 schematically exemplify a part of the method of the invention.

Figure 6 schematically represents the obtaining of the wavefront phase corresponding to different planes.

Figures 7 and 8 show image recompositions made in the transformed domain and in the measurement domain, respectively.

PREFERRED EMBODIMENT OF THE INVENTION

Two-dimensional wavefront reconstruction

The method of the invention allows recovering, from two or more defocused images, the Cartesian distribution of the horizontal and vertical wavefront phase gradients on polynomial bases, which in turn allows the use of any method of recomposing the phase from the gradients, whether zonal (Hudgin, etc.) or modal. In the case of modal methods, the set of polynomials on which the phase map of the wavefront is developed and adjusted can be chosen according to the needs of the problem: Zernike polynomials (which coincide with the classical or Seidel optical aberrations), complex exponentials (which contain the kernel of the Fourier transform, whose use accelerates the computation), Karhunen-Loève (without analytical form but constituting a basis on the annular pupil typical of telescopes), etc.

In general, the procedure for restoring the phase map from its development in a set of polynomials Z_j(x, y) comprises considering the phase of the wavefront at a point (x, y) as follows:

(1) W(x, y) = Σ_{j=0}^{N−1} d_j Z_j(x, y)

where N indicates the number of polynomials used in the development. The horizontal and vertical Cartesian gradients, S_x and S_y respectively, correspond to the following partial derivatives of the wavefront:

(2) S_x = ∂W(x, y)/∂x = Σ_{j=0}^{N−1} d_j ∂Z_j(x, y)/∂x

(3) S_y = ∂W(x, y)/∂y = Σ_{j=0}^{N−1} d_j ∂Z_j(x, y)/∂y

We assume that a photon travels from a −z plane to a +z plane and we estimate the wavefront at points (x, y) of the intermediate plane.
The intensity of the propagated wavefront is represented by a two-dimensional density function (PDF) describing the probability of occurrence of a photon (which we will denote f_XY(x, y)), through the corresponding two-dimensional cumulative distribution function (CDF) (which we will denote C(x, y)). The density function verifies:

∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f_XY(x, y) dx dy = 1

We construct the marginal cumulative distribution function in the variable x as:

C_X(x) = ∫_{−∞}^{x} f_X(s) ds

f_X being a marginal density function constructed from the density function f_XY as follows:

f_X(x) = ∫_{−∞}^{+∞} f_XY(x, y) dy

The property of being a cumulative distribution function in the corresponding variable is retained for the marginal density function. Thus,

∫_{−∞}^{+∞} f_X(x) dx = 1

Since there are data in the −z and +z planes, corresponding to the two images considered, there are two cumulative distribution functions. We denote C_1X the marginal cumulative distribution function in the −z plane and C_2X the marginal cumulative distribution function in the +z plane. Since we start from the values of f_XY in the −z and +z planes, we assume that the data associated with the −z plane are defined by f_1XY and those associated with the +z plane are determined by f_2XY:

f_1X(x) = ∫ f_1XY(x, y) dy,   f_2X(x) = ∫ f_2XY(x, y) dy,

and

C_1X(x) = ∫_{−∞}^{x} f_1X(s) ds,   C_2X(x) = ∫_{−∞}^{x} f_2X(s) ds.

We consider a monotonically increasing sequence of real numbers s(j), 1 ≤ j ≤ T, with values between 0 and 1; that is, 0 < s(j) < 1 for every 1 ≤ j ≤ T. We perform histogram specification on the marginal cumulative distribution function, looking for the counter-image of the values s(j) under the cumulative distribution function. That is, we look for the value u_1X(j) that satisfies:

C_1X(u_1X(j)) = s(j)

for every 1 ≤ j ≤ T, and the value u_2X(j) that satisfies:

C_2X(u_2X(j)) = s(j)

Thus, for each fixed value of s(j), u_1X(j) and u_2X(j) have been found.
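The marginal-CDF inversion just described can be sketched in a few lines of Python. This is a discrete illustration under assumed data: the image values and the levels s(j) are made up, and `np.searchsorted` stands in for the continuous counter-image of the CDF:

```python
import numpy as np

def marginal_quantiles(img, s):
    """For a 2-D intensity treated as a density f_XY, build the marginal CDF in x,
    C_X, and return for each level s(j) the first index u with C_X(u) >= s(j) --
    the discrete counter-image ("histogram specification") step of the method."""
    f = img / img.sum()            # normalize the image so it is a density
    fx = f.sum(axis=0)             # marginal density f_X(x): integrate f_XY over y
    Cx = np.cumsum(fx)             # marginal cumulative distribution C_X
    return np.searchsorted(Cx, s)  # u_X(j) such that C_X(u_X(j)) ~ s(j)

img1 = np.array([[1.0, 2.0, 3.0, 2.0],
                 [0.0, 1.0, 4.0, 3.0]])
s = np.linspace(0.1, 0.9, 5)       # monotone levels strictly between 0 and 1
u1 = marginal_quantiles(img1, s)
assert np.all(np.diff(u1) >= 0)    # quantile indices are non-decreasing in s
```

Applying the same function to the second image of the pair gives u_2X(j); the pairing of equal s(j) levels is what associates corresponding rays in the two planes.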
Graphically, a search with abscissa scanning of corresponding points was performed, identifying all the ordinates, as schematically represented in Figure 1.

What provides more accurate values is now to perform a sweep over the density function in the two variables for each of these values, looking for, for each value k from 1 to T, the values u_1Y(k) and u_2Y(k) that satisfy:

∫_{−∞}^{u_1Y(k)} ∫_{−∞}^{u_1X(j)} f_1XY(x, y) dx dy = s(j) s(k)

and

∫_{−∞}^{u_2Y(k)} ∫_{−∞}^{u_2X(j)} f_2XY(x, y) dx dy = s(j) s(k)

where the functions f_1XY(x, y) and f_2XY(x, y) correspond respectively to the images considered, I_1(x, y) and I_2(x, y). Graphically, what is done is to associate to each value on the corresponding abscissa the ordinate that makes the counter-images under the cumulative distribution function coincide, as schematically represented in Figure 2.

The result is a two-dimensional mesh of points given by {(u_1X(j), u_1Y(k)), j, k = 1 ... T} at height −z and {(u_2X(j), u_2Y(k)), j, k = 1 ... T} at height +z, so that for every 1 ≤ j, k ≤ T the points (u_1X(j), u_1Y(k)) and (u_2X(j), u_2Y(k)) are associated with the same value of a ray in the wavefront.

The directional derivatives of the wavefront at the points of the intermediate plane can be considered given by the expressions:

(4) W_x((u_1X(j) + u_2X(j))/2, (u_1Y(k) + u_2Y(k))/2) = (u_2X(j) − u_1X(j)) / (2z)

for every 1 ≤ j ≤ T, and

(5) W_y((u_1X(j) + u_2X(j))/2, (u_1Y(k) + u_2Y(k))/2) = (u_2Y(k) − u_1Y(k)) / (2z)

for every 1 ≤ k ≤ T.

Therefore, we can write the system of equations (2) and (3) as:

(u_2X(j) − u_1X(j)) / (2z) = Σ_{p=0}^{N−1} d_p ∂Z_p/∂x, evaluated at x = (u_1X(j) + u_2X(j))/2, y = (u_1Y(k) + u_2Y(k))/2

(u_2Y(k) − u_1Y(k)) / (2z) = Σ_{p=0}^{N−1} d_p ∂Z_p/∂y, evaluated at x = (u_1X(j) + u_2X(j))/2, y = (u_1Y(k) + u_2Y(k))/2

or, in a simplified way:

(6) S = A d

where the unknown is the matrix of coefficients d. Equation (6) represents an overdetermined system of equations, with more equations (2T²) than unknowns (N), 2T² being the number of pixels (x, y) available.
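A minimal numerical sketch of solving S = A d by least squares follows. It uses a toy monomial basis {x², xy, y²} purely for illustration (the patent's bases are Zernike polynomials or complex exponentials), with noise-free slopes at random midpoints:

```python
import numpy as np

# Sketch: recover development coefficients d from sampled wavefront slopes,
# S = A d  ->  d = A^+ S, with toy basis {x^2, x*y, y^2} (illustrative only).
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(50, 2))          # midpoints of the two ray meshes
x, y = pts[:, 0], pts[:, 1]
d_true = np.array([0.5, 0.3, 0.2])

# Rows of A: partial derivatives of each basis polynomial at the sample points.
A = np.vstack([np.column_stack([2 * x, y, 0 * x]),   # d/dx of x^2, x*y, y^2
               np.column_stack([0 * y, x, 2 * y])])  # d/dy of x^2, x*y, y^2
S = A @ d_true                                       # measured slopes (noise-free)
d_fit, *_ = np.linalg.lstsq(A, S, rcond=None)
assert np.allclose(d_fit, d_true, atol=1e-10)
```

With 100 slope equations and 3 unknowns the system is overdetermined, mirroring the 2T² equations versus N unknowns of equation (6); `np.linalg.lstsq` computes the pseudo-inverse solution d = A⁺ S.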
The development coefficients d can be found as the best fit on the plane in the least-squares sense. A preferred way to solve the previous system is the least-squares resolution:

(7) d = (AᵀA)⁻¹ Aᵀ S = A⁺ S

Equation (7) can be solved by a multitude of techniques known to the person skilled in the art, depending on whether or not the matrix AᵀA is singular.

In a particular embodiment, the wavefront is developed on a basis of complex exponentials. We truncate the development at a certain N ≥ 1 so that it can be written in the form:

W(x, y) = Σ_{p=0}^{N−1} Σ_{q=0}^{N−1} d_pq Z_pq(x, y)

where (d_pq), p, q ≥ 0, is a doubly indexed family of coefficients, and

(8) Z_pq(x, y) = (1/N) e^{(2πi/N)(px + qy)}

for every 0 ≤ p, q ≤ N − 1. At this point, a least-squares problem can be posed with the data obtained, since expression (8), differentiated with respect to x or y, gives:

(9) ∂W(x, y)/∂x = Σ_{p=0}^{N−1} Σ_{q=0}^{N−1} d_pq (2πip/N) Z_pq(x, y)

(10) ∂W(x, y)/∂y = Σ_{p=0}^{N−1} Σ_{q=0}^{N−1} d_pq (2πiq/N) Z_pq(x, y)

since, for every 0 ≤ p, q ≤ N − 1:

∂Z_pq(x, y)/∂x = (2πip/N) Z_pq(x, y),   ∂Z_pq(x, y)/∂y = (2πiq/N) Z_pq(x, y).

Evaluating at the midpoints, taking into account expressions (4) and (5), and substituting these values in equations (9) and (10), the overdetermined system is reached:

(u_2X(j) − u_1X(j)) / (2z) = Σ_{p=0}^{N−1} Σ_{q=0}^{N−1} d_pq (2πip/N) Z_pq((u_1X(j) + u_2X(j))/2, (u_1Y(k) + u_2Y(k))/2)

(u_2Y(k) − u_1Y(k)) / (2z) = Σ_{p=0}^{N−1} Σ_{q=0}^{N−1} d_pq (2πiq/N) Z_pq((u_1X(j) + u_2X(j))/2, (u_1Y(k) + u_2Y(k))/2)

with N² unknowns and 2T² equations. The value of T is given by the data and is considered to be much greater than the number of terms in the development of the phase in terms of exponentials. In this case the development coefficients can be obtained from the expression:

d_pq = −2 [i sin(πp/N) DF{S_x} + i sin(πq/N) DF{S_y}] / (4 [sin²(πp/N) + sin²(πq/N)])

where DF denotes the discrete Fourier transform.

Tomographic image restoration

The method of the invention provides a two-dimensional restoration of the wavefront phase from the defocused images.
The wavefront phase obtained corresponds to the phase accumulated up to the conjugate position in the object space. That is, if two defocused images are taken so far from the focus of the lens that they almost correspond to images taken at the pupil (or with very little separation from the entrance pupil of the optical system), the phase accumulated across the whole field of view of the scene up to the objective would be obtained. As the pair of defocused images used approaches the focus, the conjugate plane in the object space will correspond to a plane farther from the entrance pupil, and will describe the phase accumulated in the scene up to that plane. The difference between both accumulated phases provides the phase variation present between the farther plane and the pupil plane of the optical system. Therefore, the greater the number of defocused images used, the more complete the discretization of the object space and the tomographic distribution obtained of the wavefront phase.

This tomographic distribution of the wavefront phase will have the original optical resolution associated with the capture sensor, and the three-dimensional resolution (along the optical axis z) that the number of images used allows. It is important to note that the three-dimensional resolution does not strictly coincide with the number of planes or defocused images acquired, since it is possible to consider any pair of acquisition planes to obtain a sub-discretization of accumulated wavefront phases, as schematically represented in Figure 6. With a first pair of planes, the phase W1(x, y) accumulated up to the pupil is obtained; with a second pair, the accumulated phase W2(x, y). The difference between W2 and W1 provides the phase in the section indicated by the brace. Using more planes (more captured images) increases the resolution of the phase along the z axis, and a three-dimensional map of the wavefront phase is obtained.
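The subtraction step that isolates a layer can be illustrated with a trivial sketch (the two layer phases below are hypothetical values, not data from the patent):

```python
import numpy as np

# Sketch of the tomographic step: the phase of a layer between two conjugate
# planes is the difference of the accumulated wavefronts, W2 - W1.
layer1 = np.array([[0.1, 0.2], [0.0, 0.3]])  # hypothetical phase of the near layer
layer2 = np.array([[0.4, 0.1], [0.2, 0.2]])  # hypothetical phase of the far layer
W1 = layer1            # phase accumulated up to the first conjugate plane
W2 = layer1 + layer2   # phase accumulated up to the second conjugate plane
assert np.allclose(W2 - W1, layer2)  # the subtraction isolates the layer in between
```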
The method of the present invention has application in any technical field in which it is required to know the wavefront associated with the observation of a scene, including computational photography and adaptive optics, in particular in astronomical observation, to obtain the three-dimensional map of turbulence (wavefront phases) associated with a column of the atmosphere; in applications where vision through turbulent media must be corrected (for example in mobile phones, microscopes, endoscopes or augmented-reality glasses); in applications for the tomographic measurement of refractive index variations in transparent organic tissue samples; or in optical communications through turbulent media (atmosphere, ocean, body fluids, etc.).

Image intensity recomposition

The lightfield L is a four-dimensional representation of the light rays that pass through the lens of a camera. For simplicity, a simplified two-dimensional notation will be used. Thus, L_F(x, u) represents the ray that crosses the main lens of the camera at position u = (u1, u2) and reaches the sensor at position x = (x1, x2), for a camera of focal length F, as depicted in Figure 3. There is, therefore, a 4-dimensional volume that represents all the rays entering the camera and their positions of arrival at the sensor. Ng (Ng, R., Fourier slice photography. In ACM Transactions on Graphics (TOG), Vol. 24, No. 3, pp. 735-744, ACM, 2005, July) demonstrates that the image that would be projected onto the sensor, if the sensor were at a distance aF, corresponds to a 2-dimensional projection of the lightfield at an angle θ = tan⁻¹(1/a):

I_aF(x) = (1/(a²F²)) ∫ L_F(u + (x − u)/a, u) du

as schematically represented in Figure 4.
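A discrete sketch of this projection in the simplified 2-D notation follows. The nearest-integer sampling, array sizes and test lightfield are implementation assumptions, not specified by the patent:

```python
import numpy as np

def project(L, alpha, F=1.0):
    """Discrete sketch of the projection: image at distance alpha*F from a 2-D
    (x, u) lightfield, I(x) = (1/(alpha^2 F^2)) * sum_u L(u + (x-u)/alpha, u),
    with nearest-integer sampling and zero outside the sensor."""
    X, U = L.shape
    I = np.zeros(X)
    for x in range(X):
        for u in range(U):
            xs = int(round(u + (x - u) / alpha))
            if 0 <= xs < X:
                I[x] += L[xs, u]
    return I / (alpha ** 2 * F ** 2)

L = np.zeros((8, 2))
L[3, 0] = L[3, 1] = 1.0    # a ray bundle focused at sensor position x = 3
I = project(L, alpha=1.0)  # alpha = 1 reads out the focused image directly
assert I[3] == 2.0 and I.sum() == 2.0
```

For alpha = 1 the shift vanishes and each pixel simply sums the rays arriving at it; other values of alpha refocus the same lightfield at a different plane.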
The method of the invention is based on interpreting I_aF(x) as a sum of images at different values of u, displaced with respect to one another, as schematically represented in Figure 5, and on estimating the images at the different values of u by finding which set of images, displaced according to a value a' and added together, is closest to the input image captured with a focus distance a'F. Thus the displacement in the x dimension (in pixels) is u + (x − u)/a'.

The method comprises estimating the value of the lightfield focused at distance F (L_F) at M distinct values of u from P images (I_1(x), I_2(x) ... I_P(x)) focused at distances a_1F, a_2F ... a_PF and captured with a conventional camera. For this, the values of the lightfield are sought such that:

Σ_{n=1}^{M} L_F(n + [(x − n)/a_j], n) = a_j² F² I_j(x), ∀j ∈ {1 ... P} and ∀x ∈ {1 ... k}

The above expression can be represented in a simple way by a linear system of equations of the type Ax = b. This system can be solved by finding the x that minimizes |Ax − b|².

So far, single-channel images have been assumed. In the case of color images (several channels) it is enough to generate the matrix A once; subsequently, a new vector b is created containing the information of the images in the channel to be resolved.

The method for recomposing the intensity of the image according to the invention allows the generation of a single fully focused image with all the optical resolution ("all-in-focus"), the generation of the all-in-focus stereo pair, the generation of the all-in-focus multi-stereo (lightfield) image, and the generation of a lightfield focused wherever desired, with applications in microscopy, photography, endoscopy, cinema, etc.

Example: we will assume two images of 8 elements, I1(x) and I2(x), focused at distances a1 = 2 and a2 = 4, with F = 1 m. The sum in this case runs over indices n = 1 to n = 2.
The equations for j = 1 are:

L_F(1 + [(x − 1)/2], 1) + L_F(2 + [(x − 2)/2], 2) = 2² I1(x), x = 1 ... 8

and for j = 2:

L_F(1 + [(x − 1)/4], 1) + L_F(2 + [(x − 2)/4], 2) = 4² I2(x), x = 1 ... 8

Developing:

L_F(1,1) + L_F(2,2) = 2² I1(1)
L_F(2,1) + L_F(2,2) = 2² I1(2)
L_F(2,1) + L_F(3,2) = 2² I1(3)
L_F(3,1) + L_F(3,2) = 2² I1(4)
L_F(3,1) + L_F(4,2) = 2² I1(5)
L_F(4,1) + L_F(4,2) = 2² I1(6)
L_F(4,1) + L_F(5,2) = 2² I1(7)
L_F(5,1) + L_F(5,2) = 2² I1(8)
L_F(1,1) + L_F(2,2) = 4² I2(1)
L_F(1,1) + L_F(2,2) = 4² I2(2)
L_F(2,1) + L_F(2,2) = 4² I2(3)
L_F(2,1) + L_F(3,2) = 4² I2(4)
L_F(2,1) + L_F(3,2) = 4² I2(5)
L_F(2,1) + L_F(3,2) = 4² I2(6)
L_F(3,1) + L_F(3,2) = 4² I2(7)
L_F(3,1) + L_F(4,2) = 4² I2(8)

In matrix form, this is a linear system Ax = b in which each row of A contains a 1 in the two columns corresponding to the lightfield unknowns appearing in that equation, and b contains the values a_j² I_j(x). The resolution of this system provides the values of the lightfield L_F. The lightfield values that are not defined in any equation of the system take the value 0.
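A system of the same type as the worked example can be assembled and solved programmatically. The sketch below uses a hypothetical ground-truth lightfield to generate consistent image data, and Python's `round` as the nearest-integer rule (note it rounds .5 ties to even, which may differ from the text's choice at exact half-pixel shifts); the check is that a least-squares solution explains the data exactly when the images are consistent:

```python
import numpy as np

# Two 8-pixel images, alpha1 = 2, alpha2 = 4, F = 1; unknowns L_F(x, u), u in {1, 2}.
# Build A l = b from sum_n L_F(n + [(x - n)/alpha_j], n) = alpha_j^2 F^2 I_j(x).
X, U, F = 8, 2, 1.0
alphas = [2.0, 4.0]
L_true = np.arange(1.0, X * U + 1).reshape(X, U)  # hypothetical ground-truth lightfield
rows, b = [], []
for a in alphas:
    for x in range(1, X + 1):                     # 1-based pixel indices as in the example
        row = np.zeros(X * U)
        val = 0.0
        for n in range(1, U + 1):
            xs = int(round(n + (x - n) / a))      # nearest integer (.5 ties go to even)
            if 1 <= xs <= X:
                row[(xs - 1) * U + (n - 1)] = 1.0
                val += L_true[xs - 1, n - 1]      # consistent right-hand side a^2 F^2 I_j(x)
        rows.append(row)
        b.append(val)
A = np.vstack(rows)
l, *_ = np.linalg.lstsq(A, np.asarray(b), rcond=None)
residual = np.linalg.norm(A @ l - np.asarray(b))
assert residual < 1e-9  # the recovered lightfield explains the image data
```

Since some unknowns appear in no equation, A is rank-deficient; `lstsq` returns the minimum-norm solution, which sets those unknowns to 0, matching the convention stated in the text.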
Figure 7 shows the image recomposition of a scene made in the transformed domain, according to a prior-art procedure. Figure 8 shows the image recomposition of the same scene performed in the measurement domain, using the method of the present invention to obtain the lightfield from defocused images. Although the images in Figures 7 and 8 are not normalized to the same signal-strength value, it can be seen that the recomposition made in the measurement domain is cleaner and more contrasted at the edges of the resolution test figures. The area marked with a box and enlarged for better appreciation perfectly illustrates the difference in quality between both recoveries.
Claims:
[1] 1.- Method to determine the complex amplitude of the electromagnetic field associated with a scene, which comprises the following stages:

a) capturing a plurality of images of the scene by means of a camera, the images being focused on focus planes arranged at different distances, wherein the camera comprises a lens or lens system of focal length F and a sensor arranged at a certain distance from the lens in the image space,

b) taking at least one pair of images from the plurality of images and determining the wavefront accumulated up to the conjugate plane in the object space corresponding to the plane intermediate between the focus planes of the two images of the pair, the wavefront W(x, y) being determined as:

W(x, y) = Σ_{p=0}^{N−1} d_p Z_p(x, y)

where {Z_p(x, y)} is a set of predefined polynomials and N the number of polynomials used in the development, and where the coefficients d_p are determined by solving the system of equations:

(u_2X(j) − u_1X(j)) / (2z) = Σ_{p=0}^{N−1} d_p ∂Z_p/∂x, evaluated at x = (u_1X(j) + u_2X(j))/2, y = (u_1Y(k) + u_2Y(k))/2

(u_2Y(k) − u_1Y(k)) / (2z) = Σ_{p=0}^{N−1} d_p ∂Z_p/∂y, evaluated at x = (u_1X(j) + u_2X(j))/2, y = (u_1Y(k) + u_2Y(k))/2

with 2z being the distance between the focus planes of the two images of the pair, {(u_1X(j), u_1Y(k)), j, k = 1 ... T} points belonging to the first image of the pair, and {(u_2X(j), u_2Y(k)), j, k = 1 ... T} points belonging to the second image of the pair, such that for every 1 ≤ j, k ≤ T it is verified that:

∫_{−∞}^{u_1Y(k)} ∫_{−∞}^{u_1X(j)} f_1XY(x, y) dx dy = s(j) s(k)

and

∫_{−∞}^{u_2Y(k)} ∫_{−∞}^{u_2X(j)} f_2XY(x, y) dx dy = s(j) s(k)

where s(j) is a sequence of real numbers with values between 0 and 1, monotonically increasing for every 1 ≤ j ≤ T, and f_XY is the two-dimensional density function describing the probability of occurrence of a photon, given in each case by the normalized intensity I(x, y) of the corresponding image of the pair, that is:

∫_{−∞}^{u_1Y(k)} ∫_{−∞}^{u_1X(j)} I_1(x, y) dx dy = s(j) s(k)

∫_{−∞}^{u_2Y(k)} ∫_{−∞}^{u_2X(j)} I_2(x, y) dx dy = s(j) s(k).
[2] 2. Method according to claim 1, wherein the wavefront is given by the expression

$$W(x, y) = \sum_{p=0}^{N-1} \sum_{q=0}^{N-1} d_{pq} Z_{pq}(x, y)$$

with

$$Z_{pq}(x, y) = \frac{1}{N}\, e^{\frac{2\pi i}{N}(px + qy)}$$

for every $0 \le p, q \le N - 1$.

[3] 3. Method according to any of the preceding claims, comprising determining the accumulated wavefront for a plurality of pairs of images.

[4] 4. Method according to claim 3, comprising determining the phase variation between two planes of the object space as the subtraction of the wavefronts accumulated up to said planes.

[5] 5. Method according to claim 4, comprising determining the phase variation for a plurality of object planes.

[6] 6. Method according to any of the preceding claims, further comprising determining, from $P$ images selected from the plurality of captured images, the value of the lightfield ($L$) focused at a distance $F$ for $M$ distinct values of $u$, $M \le P$, as the lightfield values that verify the system of equations

$$\sum_{n=1}^{M} L_F\!\left(n + \left[\frac{x - n}{\alpha_j}\right], n\right) = \alpha_j^2 F^2 I_j(x), \quad \forall j \in \{1 \ldots P\} \text{ and } \forall x \in \{1 \ldots k\}$$

where $P$ is the number of images considered for the determination of the lightfield, $F$ the focal length of the lens, $L_F$ the value of the lightfield focused at distance $F$, $\alpha_j F$ the focus distance of image $j$ and $I_j(x)$ the intensity of image $j$, and where $[x]$ denotes the integer closest to $x$; obtaining as a result, for each image $j$ with $1 \le j \le P$, the lightfield $L_F$ evaluated at the value of $u$ resulting from the fit, that is, the view of the lightfield corresponding to that value, $x$ and $u$ being the two-dimensional vectors that determine the position on the sensor and on the camera lens, respectively.

[7] 7. Method according to claim 6, wherein the value of the lightfield is determined by solving the system of equations by least squares, that is, minimizing

$$\sum_{j=1}^{P} \left| \sum_{n=1}^{M} L_F\!\left(n + \left[\frac{x - n}{\alpha_j}\right], n\right) - \alpha_j^2 F^2 I_j(x) \right|^2.$$

[8] 8.
Method according to any of the preceding claims, wherein the two images of each selected pair of images are taken, respectively, on either side of the focus.

[9] 9. Method according to the preceding claim, wherein the two images of each selected pair of images are taken at symmetrical distances on either side of the focus.

[10] 10. Device for determining the complex amplitude of the electromagnetic field associated with a scene, comprising image-capture means, which comprise a lens of focal length $F$ and an image sensor arranged parallel to the lens at a certain distance from it in its image space, and processing means configured to carry out step b) of the method according to claim 1.

[11] 11. Device according to claim 10, wherein the processing means are additionally configured to carry out the actions defined in any of claims 2 to 9.
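The least-squares recovery of claims 6 and 7 can be sketched numerically in one dimension: build one linear equation per image sample relating the unknown lightfield views to the defocused intensities, then solve in the least-squares sense. Sizes, $\alpha_j$ values and the random ground-truth lightfield below are illustrative assumptions (the right-hand side is synthesized from that known lightfield so the fit can be checked), not values from the patent:

```python
import numpy as np

# 1-D sketch of claims 6-7: recover the views of a lightfield L_F(m, n)
# (m: sensor position, n: lens position) from P defocused images.
rng = np.random.default_rng(0)
K, M = 32, 3                           # sensor samples, lens samples
alphas = [0.8, 1.0, 1.25]              # P = 3 relative focus distances alpha_j
P = len(alphas)
L_true = rng.random((K, M))            # ground-truth lightfield (for checking)

A = np.zeros((P * K, K * M))           # one equation per image sample
b = np.zeros(P * K)                    # alpha_j^2 F^2 I_j(x), synthesized here
for j, a in enumerate(alphas):
    for x in range(K):
        row = j * K + x
        for n in range(M):
            # sensor index [n + (x - n)/alpha_j], rounded and clipped
            m = min(max(round(n + (x - n) / a), 0), K - 1)
            A[row, m * M + n] += 1.0   # this unknown contributes to I_j(x)
            b[row] += L_true[m, n]     # synthesized measurement

# Claim 7: solve the system in the least-squares sense.
L_flat, *_ = np.linalg.lstsq(A, b, rcond=None)
L_rec = L_flat.reshape(K, M)           # one column per recovered view
```

Since the synthetic right-hand side is consistent by construction, the least-squares residual is essentially zero; with real intensities $I_j(x)$ the residual measures how well the $P$ images can be explained by $M$ views.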
Similar technologies:
Publication number | Publication date | Patent title
US8305485B2 | 2012-11-06 | Digital camera with coded aperture rangefinder
ES2868398T3 | 2021-10-21 | Depth system and procedure from blur images
DK2239706T3 | 2014-03-03 | A method for real-time camera and obtaining visual information of three-dimensional scenes
US8432479B2 | 2013-04-30 | Range measurement using a zoom camera
BR112014028811B1 | 2020-11-17 | Imaging system, method for registering an image of an object in a spectral band of terahertz radiation through radiography and visualization of the image in visible light, and storage medium
US20180224552A1 | 2018-08-09 | Compressed-sensing ultrafast photography
KR20090107536A | 2009-10-13 | Method and apparatus for quantitative 3-D imaging
CN106170086B | 2019-03-15 | Method, device and system for drawing a three-dimensional image
CN108364342B | 2021-06-18 | Light field microscopic system and three-dimensional information reconstruction method and device thereof
ES2578356B1 | 2017-08-04 | Method for determining the complex amplitude of the electromagnetic field associated with a scene
Rodríguez-Ramos et al. | 2008 | Wavefront and distance measurement using the CAFADIS camera
JP6862569B2 | 2021-04-21 | Virtual ray tracing method and dynamic refocus display system for light field
Wei et al. | 2012 | Depth measurement using single camera with fixed camera parameters
EP3208773B1 | 2018-06-13 | Disparity-to-depth calibration for plenoptic imaging systems
JP6968895B2 | 2021-11-17 | Method and optical system to acquire the tomographic distribution of wavefronts of electromagnetic fields
Rodríguez-Ramos et al. | 2012 | Atmospherical wavefront phases using the plenoptic sensor
Schmalz | 2012 | Robust single-shot structured light 3D scanning
Hahne | 2016 | The standard plenoptic camera: applications of a geometrical light field model
Rangarajan | 2014 | Pushing the limits of imaging using patterned illumination
Trujillo-Sevilla et al. | 2014 | Tomographic wavefront retrieval by combined use of geometric and plenoptic sensors
Liu et al. | 2016 | 4D phase-space multiplexing for fluorescent microscopy
Alonso | 2016 | Multi-focus computational optical imaging in Fourier domain
Kwan et al. | 2017 | Development of a light field laparoscope for depth reconstruction
Sinharoy | 2016 | Scheimpflug with computational imaging to extend the depth of field of iris recognition systems
CN112013973A | 2020-12-01 | Fibonacci photon sieve based variable shear ratio four-wave shearing interferometer
Family patents:
Publication number | Publication date
JP6600360B2 | 2019-10-30
TWI687661B | 2020-03-11
ES2578356B1 | 2017-08-04
EP3239672A4 | 2018-09-19
IL253142D0 | 2017-08-31
JP2018512070A | 2018-05-10
US10230940B2 | 2019-03-12
IL253142A | 2021-07-29
TW201643391A | 2016-12-16
KR20170099959A | 2017-09-01
CN107209061B | 2020-04-14
WO2016102731A1 | 2016-06-30
DK3239672T3 | 2020-04-14
EP3239672B1 | 2020-01-15
CN107209061A | 2017-09-26
EP3239672A1 | 2017-11-01
US20180007342A1 | 2018-01-04
Cited documents:
Publication number | Application date | Publication date | Applicant | Patent title
US8542421B2 | 2006-11-17 | 2013-09-24 | Celloptic, Inc. | System, apparatus and method for extracting three-dimensional information of an object from received electromagnetic radiation
US8405890B2 | 2007-01-29 | 2013-03-26 | Celloptic, Inc. | System, apparatus and method for extracting image cross-sections of an object from received electromagnetic radiation
ES2372515B2 | 2008-01-15 | 2012-10-16 | Universidad de La Laguna | Camera for the real-time acquisition of the visual information of three-dimensional scenes
CN101939703B | 2008-12-25 | 2011-08-31 | 深圳市泛彩溢实业有限公司 | Hologram three-dimensional image information collecting device and method, reproduction device and method
JP2010169976A | 2009-01-23 | 2010-08-05 | Sony Corp | Spatial image display
TW201137923A | 2010-02-10 | 2011-11-01 | Halcyon Molecular Inc | Aberration-correcting dark-field electron microscopy
CN102662238B | 2012-05-03 | 2014-01-15 | 中国科学院长春光学精密机械与物理研究所 | Space optical camera having on-orbit self-diagnosis and compensation functions
AU2017202910A1 | 2017-05-02 | 2018-11-22 | Canon Kabushiki Kaisha | Image processing for turbulence compensation
US10565085B2 | 2018-06-06 | 2020-02-18 | SAS Institute, Inc. | Two-stage distributed estimation system
Legal status:
2017-08-04 | FG2A | Definitive protection | Ref document number: 2578356; Country of ref document: ES; Kind code of ref document: B1; Effective date: 2017-08-04
Priority:
Application number | Publication number | Application date | Patent title
ES201431900A | ES2578356B1 | 2014-12-22 | METHOD FOR DETERMINING THE COMPLEX AMPLITUDE OF THE ELECTROMAGNETIC FIELD ASSOCIATED WITH A SCENE
PCT/ES2015/070936 | WO2016102731A1 | 2015-12-21 | Method for determining the complex amplitude of the electromagnetic field associated with a scene
US15/538,849 | US10230940B2 | 2015-12-21 | Method for determining the complex amplitude of the electromagnetic field associated with a scene
JP2017533821A | JP6600360B2 | 2015-12-21 | A method for determining the complex amplitude of an electromagnetic field associated with a scene
DK15872012.8T | DK3239672T3 | 2015-12-21 | Method for determining the complex amplitude of the electromagnetic field associated with a scene
CN201580072973.9A | CN107209061B | 2015-12-21 | Method for determining complex amplitude of scene-dependent electromagnetic field
EP15872012.8A | EP3239672B1 | 2015-12-21 | Method for determining the complex amplitude of the electromagnetic field associated with a scene
KR1020177020020A | KR20170099959A | 2015-12-21 | Method for determining the complex amplitude of the electromagnetic field associated with a scene
TW104143159A | TWI687661B | 2015-12-22 | Method and device for determining the complex amplitude of the electromagnetic field associated to a scene
IL253142A | IL253142A | 2017-06-22 | Method for determining the complex amplitude of the electromagnetic field associated with a scene
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an
国家/地区
|